- Stringent quality requirements for safety-critical applications drive the demand for “zero defects” in modern ICs. In this context, delay characterization of standard cells for resistive open defects is a growing concern due to aggressive timing margins in digital circuits. The problem is compounded by the large number of open-defect sites in standard cells, combined with the wide range of defect resistance values at each site, which can make defect simulation and characterization prohibitively expensive. To alleviate this complexity, we propose Resistive Fault Dominance (RFD) for resistive open defects. RFD eliminates simulations of open defects with intermediate defect resistance values that are guaranteed to exceed the specified timing margins of standard cells, based on tests for specific “dominant” open defects. This significantly reduces the computational cost of cell-library characterization, cutting simulation effort by 84%–91%. An algorithmic fault simulation methodology for resistive open defects on parasitic-extracted (PEX) transistor-level netlists is developed. Free, publicly accessible full text available September 23, 2026.
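  The dominance argument above lends itself to a simple pruning loop. The following is a minimal sketch, assuming added delay grows monotonically with open-defect resistance; `simulate_delay`, the toy delay model, and the numeric values are illustrative assumptions, not the paper's methodology.

  ```python
  def characterize_site(resistances, margin, simulate_delay):
      """Return (simulated, pruned) resistance values for one defect site.

      Assumes the delay added by an open defect grows monotonically
      with its resistance, so once one value violates the timing
      margin, all larger values at the site are dominated and skipped.
      """
      simulated, pruned = [], []
      for r in sorted(resistances):
          simulated.append(r)
          if simulate_delay(r) > margin:       # dominant defect found
              pruned = [x for x in resistances if x > r]
              break                            # larger values guaranteed to fail
      return simulated, pruned

  # Toy usage: a linear delay model and a 5 ps margin (assumed numbers).
  sims, skipped = characterize_site(
      resistances=[1e3, 1e4, 1e5, 1e6],        # ohms
      margin=5e-12,                            # seconds
      simulate_delay=lambda r: 1e-16 * r,      # stand-in for a PEX/SPICE run
  )
  print(sims, skipped)                         # [1e3, 1e4, 1e5] [1e6]
  ```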
- Large language models (LLMs) have achieved remarkable success in natural language processing (NLP), demonstrating significant capabilities in processing and understanding text data. However, recent studies have identified limitations in LLMs’ ability to manipulate, program, and reason about structured data, especially graphs. We introduce GraphEval36K, the first comprehensive graph dataset, comprising 40 graph coding problems and 36,900 test cases to evaluate the ability of LLMs on graph problem solving. Our dataset is categorized into eight primary and four sub-categories to ensure a thorough evaluation across different types of graphs. We benchmark ten LLMs, finding that private models outperform open-source ones, though the gap is narrowing. We also analyze the performance of LLMs across directed vs. undirected graphs, different kinds of graph concepts, and network models. Furthermore, to improve the usability of our evaluation framework, we propose Structured Symbolic Decomposition (SSD), an instruction-based method designed to enhance LLM performance on complex graph tasks. Results show that SSD improves the average passing rate of GPT-4, GPT-4o, Gemini-Pro, and Claude-3-Sonnet by 8.38%, 6.78%, 29.28%, and 25.28%, respectively. Free, publicly accessible full text available April 29, 2026.
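  A harness in the spirit of this benchmark is sketched below: it executes a model-generated solution against each problem's test cases and reports per-category passing rates. The field names (`prompt`, `tests`, `category`), the `solve` entry point, and the `generate_solution` callback are assumptions for illustration, not the dataset's actual schema.

  ```python
  from collections import defaultdict

  def passing_rates(problems, generate_solution):
      """Per-category fraction of problems whose generated code passes all tests."""
      passed, total = defaultdict(int), defaultdict(int)
      for p in problems:
          namespace = {}
          # Execute model output directly; a real harness would sandbox this.
          exec(generate_solution(p["prompt"]), namespace)
          solve = namespace["solve"]           # assumed entry-point name
          ok = all(solve(*t["input"]) == t["expected"] for t in p["tests"])
          passed[p["category"]] += ok
          total[p["category"]] += 1
      return {c: passed[c] / total[c] for c in total}
  ```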
- Free, publicly accessible full text available January 1, 2026.
- Large Language Models (LLMs) have achieved remarkable success in natural language tasks, yet understanding their reasoning processes remains a significant challenge. We address this by introducing XplainLLM, a dataset accompanying an explanation framework designed to enhance LLM transparency and reliability. Our dataset comprises 24,204 instances where each instance interprets the LLM’s reasoning behavior using knowledge graphs (KGs) and graph attention networks (GAT), and includes explanations of LLMs such as the decoder-only Llama-3 and the encoder-only RoBERTa. XplainLLM also features a framework for generating grounded explanations and debugger-scores for multidimensional quality analysis. Our explanations include why-choose and why-not-choose components, reason-elements, and debugger-scores that collectively illuminate the LLM’s reasoning behavior. Our evaluations demonstrate XplainLLM’s potential to reduce hallucinations and improve grounded explanation generation in LLMs. XplainLLM is a resource for researchers and practitioners to build trust in and verify the reliability of LLM outputs. Our code and dataset are publicly available. Free, publicly accessible full text available November 12, 2025.
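  As a rough picture of what one instance might contain, here is a hypothetical record layout inferred from the abstract; the actual published schema may differ.

  ```python
  from dataclasses import dataclass, field

  @dataclass
  class ExplanationInstance:
      question: str
      model_answer: str
      why_choose: str                  # grounded rationale for the chosen answer
      why_not_choose: str              # rationale against the rejected options
      reason_elements: list[str]       # KG nodes surfaced via GAT attention
      debugger_scores: dict[str, float] = field(default_factory=dict)  # per-dimension quality
  ```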
- Lifelong learning plays an important role in achieving success in one’s professional life. Engaging students in metacognition assists in the development of their lifelong learning abilities. Instructors can integrate reflection activities into their courses to provide students with multiple opportunities for metacognitive engagement. During reflection, students regulate their cognition by engaging in three dimensions of metacognition: Planning, Monitoring, and Evaluating. Reflection is a complex process, and it takes time to reach the level of critical reflection. The purpose of this study was to investigate the change in students’ level of engagement in the three dimensions of metacognition when reflecting on the third- and tenth-week assignments of an environmental engineering course. Data collection took place in the Fall of 2023 at a large Midwestern university. Students’ responses to the assigned reflection prompts for each dimension were coded for their level of engagement in each element of the three dimensions using a revised version of a prior coding scheme. Results showed that, for both assignments, students’ responses were mainly at the vague level for all elements of the three dimensions, indicating superficial engagement in the reflection activity. Recommendations are provided to help instructors improve students’ understanding of the reflection activity and their level of engagement in the three dimensions of metacognition.
- Free, publicly accessible full text available March 25, 2026.
- Wireless links at sub-THz bands require low-SWaP SDR modems. We report early design experimentation on an SDR operating in the 130–150 GHz band, with ASK/BPSK/QPSK modulation on I/Q channels, at a maximum data rate of 128 Mbps. The design uses 110–170 GHz front-ends from Virginia Diodes and a Xilinx RFSoC ZCU111 for DSP operations. A 1 GHz baseband example at 145.5 GHz is provided. The experiment uses horn antennas with 21 dB gain. The SNR is about 40 dB without cross-correlation gain in the detector, which provides an additional 15 dB of link margin. A real-time bit rate of 128 Mbps is achieved. Example applications include vehicle-to-vehicle, vehicle-to-infrastructure, backhaul, and device-to-aerostat links. This paper provides a platform from which further design work will lead to increased data rate and/or range and enhanced security through encryption. Future designs will add digital interfaces such as Ethernet, AXI, PCIe, and USB-C.
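  The quoted figures combine with simple dB arithmetic; the sketch below reproduces it and adds free-space path loss for context. Only the 21 dB horn gains, the ~40 dB SNR, and the 15 dB correlation gain come from the text; the 1 m distance is an assumed example.

  ```python
  import math

  def fspl_db(freq_hz, dist_m):
      """Free-space path loss (Friis) in dB."""
      return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3e8)

  snr_db = 40.0        # measured SNR, before correlation gain
  corr_gain_db = 15.0  # detector cross-correlation gain
  print(f"FSPL at 145.5 GHz over 1 m: {fspl_db(145.5e9, 1.0):.1f} dB")   # ~75.7 dB
  print(f"Effective link margin: {snr_db + corr_gain_db:.1f} dB")        # 55.0 dB
  ```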
- Large language models are typically aligned with human preferences by optimizing reward models (RMs) fitted to human feedback. However, human preferences are multi-faceted, and it is increasingly common to derive reward from a composition of simpler reward models, each of which captures a different aspect of language quality. This itself presents a challenge, as it is difficult to appropriately weight these component RMs when combining them. Compounding this difficulty, because any RM is only a proxy for human evaluation, this process is vulnerable to overoptimization, wherein past a certain point, accumulating higher reward is associated with worse human ratings. In this paper, we perform, to our knowledge, the first study on overoptimization in composite RMs, showing that correlation between component RMs has a significant effect on the locations of these points. We then introduce an approach to this issue that uses constrained reinforcement learning to prevent the agent from exceeding each RM’s threshold of usefulness. Our method addresses the problem of weighting component RMs by learning dynamic weights, naturally expressed by Lagrange multipliers. As a result, each RM stays within the range at which it is an effective proxy, improving evaluation performance. Finally, we introduce an adaptive method using gradient-free optimization to identify and optimize towards these points during a single run.
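  A minimal sketch of the constrained-RL idea, assuming each component RM r_i has a usefulness threshold tau_i and the multipliers are adapted by projected dual ascent; the paper's exact objective and update rule may differ.

  ```python
  def composite_reward(component_rewards, taus, lams):
      # Lagrangian relaxation: the sum of rewards pushes each r_i up,
      # while the lambda_i * (r_i - tau_i) term pushes back once r_i
      # exceeds its usefulness threshold tau_i.
      return sum(r - lam * (r - tau)
                 for r, tau, lam in zip(component_rewards, taus, lams))

  def update_multipliers(component_rewards, taus, lams, lr=0.01):
      # Projected dual ascent: lambda_i grows while r_i > tau_i and
      # decays toward zero otherwise, acting as a learned dynamic weight.
      return [max(0.0, lam + lr * (r - tau))
              for r, tau, lam in zip(component_rewards, taus, lams)]
  ```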